Classification of Explainable Artificial Intelligence Methods through Their Output Formats


Abstract

Machine and deep learning have proven their ability to generate data-driven models with a high degree of accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords "explainable artificial intelligence", "explainable machine learning", and "interpretable machine learning". A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation for the problem at hand. Given the wide variety of challenges faced by researchers, existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems, and application fields of artificial intelligence (AI). The task of identifying the most appropriate method can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future methods should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields and by new regulations.



Similar Resources

Building Explainable Artificial Intelligence Systems

As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems a...


Explainable Artificial Intelligence for Training and Tutoring

This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.


Explainable Artificial Intelligence via Bayesian Teaching

Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be scrutable. The explainability of machine learning systems is therefore of increasing interest. We propose an explanation-by-examples approach that builds on our recent research in Bayesian teaching in which...


Automated Reasoning for Explainable Artificial Intelligence

Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This paper discusses the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligenc...


Classification of Big Data Through Artificial Intelligence

Owing to technology innovations, there has been a large increase in the utilization of Big Data, one of the most popular forms of media thanks to its content richness, for several vital applications. To sustain the ongoing growth of Big Data, there is an emerging demand for an advanced content-based data classification sy...



Journal

Journal title: Machine Learning and Knowledge Extraction

Year: 2021

ISSN: 2504-4990

DOI: https://doi.org/10.3390/make3030032